
Block Expanded DINORET: Adapting Natural Domain Foundation Models for Retinal Imaging Without Catastrophic Forgetting

Zoellin, Jay, Merk, Colin, Buob, Mischa, Saad, Amr, Giesser, Samuel, Spitznagel, Tahm, Turgut, Ferhat, Santos, Rui, Zhou, Yukun, Wagner, Siegfried, Keane, Pearse A., Tham, Yih Chung, DeBuc, Delia Cabrera, Becker, Matthias D., Somfai, Gabor M.

arXiv.org Artificial Intelligence

Integrating deep learning into medical imaging is poised to greatly advance diagnostic methods, but it faces challenges with generalizability. Foundation models, based on self-supervised learning, address these issues and improve data efficiency. Natural domain foundation models show promise for medical imaging, but systematic research evaluating domain adaptation, especially using self-supervised learning and parameter-efficient fine-tuning, remains underexplored. Additionally, little research addresses the issue of catastrophic forgetting during fine-tuning of foundation models. We adapted the DINOv2 vision transformer for retinal imaging classification tasks using self-supervised learning and generated two novel foundation models, termed DINORET and BE DINORET. Publicly available color fundus photographs were employed for model development and subsequent fine-tuning for diabetic retinopathy staging and glaucoma detection. We introduced block expansion as a novel domain adaptation strategy and assessed the models for catastrophic forgetting. Models were benchmarked against RETFound, a state-of-the-art foundation model in ophthalmology. DINORET and BE DINORET demonstrated competitive performance on retinal imaging tasks, with the block-expanded model achieving the highest scores on most datasets. Block expansion successfully mitigated catastrophic forgetting. Our few-shot learning studies indicated that DINORET and BE DINORET outperform RETFound in terms of data efficiency. This study highlights the potential of adapting natural domain vision models to retinal imaging using self-supervised learning and block expansion. BE DINORET offers robust performance without sacrificing previously acquired capabilities. Our findings suggest that these methods could enable healthcare institutions to develop tailored vision models for their patient populations, enhancing global healthcare inclusivity.
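The abstract does not spell out how block expansion works, so the following is a minimal, hypothetical sketch of the general idea as it is commonly formulated: new transformer blocks, initialized to compute the identity, are interleaved among the frozen original blocks, and only the new blocks are trained. Because the expanded network initially reproduces the original network's outputs, pretrained capabilities are preserved at the start of fine-tuning, which is why the technique mitigates catastrophic forgetting. The function name `expand_blocks` and the dictionary representation are illustrative, not the authors' implementation.

```python
def expand_blocks(blocks, k):
    """Insert one identity-initialized, trainable block after every
    k original blocks; the originals are kept frozen."""
    expanded = []
    for i, block in enumerate(blocks, start=1):
        # Original pretrained block: parameters frozen during adaptation.
        expanded.append({"block": block, "trainable": False})
        if i % k == 0:
            # New block: initialized so its residual branch outputs zero,
            # i.e. the block acts as the identity at step 0.
            expanded.append({"block": "identity_init", "trainable": True})
    return expanded

original = ["blk1", "blk2", "blk3", "blk4"]
expanded = expand_blocks(original, k=2)
# 4 frozen originals + 2 trainable identity-initialized blocks
```

At initialization the expanded model's forward pass matches the original model exactly, so domain adaptation proceeds by training only the inserted blocks on the new (retinal) data.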


On-Demand Grandkids and Robot Pals to Keep Senior Loneliness at Bay

#artificialintelligence

At the opposite end of the country, in Pembroke Pines, Fla., 87-year-old Marilyn Sumkin uses an app called Join Papa to summon what the company calls "grandchildren on demand." College students show up for shopping, chores and chit-chat. Studies have found that loneliness is worse for health than obesity or inactivity, and is as lethal as smoking 15 cigarettes a day. It's also an epidemic: A recent study from Cigna Corp. found that about half of Americans are lonely. According to a recent Harvard University study, the cost of loneliness for Medicare is $6.7 billion a year.


Natural Language Understanding (NLU) in Fraud Risk Management – a case study

@machinelearnbot

This is a continuation of my previous blog, "Natural Language Understanding – Application Notes with Context Discriminant". Natural Language Understanding (NLU) is a subtopic of Natural Language Processing (NLP). Successful implementations of NLU are difficult because of limitations in prevailing technology. SiteFocus overcame these limitations with a new approach to NLU, which has been successfully implemented in a commercial solution called Communications in Focus (CIF).